Chapter 4 Partial Differential Equations

Partial differential equations are incredibly powerful modelling tools, used to describe phenomenological behaviour mathematically: how heat and mass are transferred, how medicine traverses the human body and permeates organs and muscle tissue differently, the motion of fluids, general physics simulation in game design, chemical reactions and interactions, and so on.

Partial differential equations (PDEs) form a class of equations which extend ordinary differential equations to a number of dimensions. In this course we will focus on PDEs with one spatial dimension and one temporal dimension, with the dependent variable typically denoted by \(u(x,t)\), where \(x\) is the spatial variable and \(t\) represents time. We shall also restrict our analysis to PDEs on a regular domain, \(x \in [a,b]\), although much research has been conducted on solving PDEs on arbitrary domains/geometries. A partial derivative of our dependent variable \(u\) with respect to \(x\) is written \(\frac{\partial u}{\partial x}\), or, making use of the shorthand, \(u_{x}\).

4.1 Classification of Partial Differential Equations

We may classify a PDE, given the general form of a two-dimensional, second-order PDE \[\begin{equation} a u_{xx} + b u_{xy} + c u_{yy} + d u_x + eu_y + fu+g=0, \end{equation}\] by the discriminant:

\(b^2 - 4ac > 0\) : hyperbolic,
\(b^2 - 4ac = 0\) : parabolic,
\(b^2 - 4ac < 0\) : elliptic.

Unfortunately, not all PDEs fit neatly into one of these categories; if the coefficients are variable then the PDE may exhibit properties from different classifications under varying conditions. Each of these categories is associated with typical behaviour, summarised below:

  1. Hyperbolic: Time-dependent PDEs which exhibit propagating behaviour, like wave motion, and do not tend toward a steady state. (Wave Equation, Advection Equation, etc.) \[u_{tt} = \omega^2 u_{xx}\]
  2. Parabolic: Time-dependent PDEs which exhibit dissipative behaviour, like heat diffusion, and do tend toward a steady state. (Heat/Diffusion Equation, Reaction-Diffusion Equation, etc.) \[u_t = D u_{xx}\]
  3. Elliptic: Steady PDEs (not time dependent) that are in a state of equilibrium. (Laplace’s equation, Poisson’s equation, etc.) \[u_{xx} +u_{yy} = 0\]
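Since the classification follows directly from the sign of the discriminant, it is easy to compute. The sketch below is our own illustration (the function name and coefficient choices are not from the text); each PDE is first written in the general form above so its second-order coefficients can be read off.

```python
def classify_pde(a, b, c):
    """Classify a*u_xx + b*u_xy + c*u_yy + (lower-order terms) = 0
    by the sign of the discriminant b^2 - 4ac."""
    disc = b**2 - 4*a*c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# Wave equation u_tt - w^2 u_xx = 0 with w = 2: a = -4, b = 0, c = 1
print(classify_pde(-4.0, 0.0, 1.0))   # hyperbolic
# Heat equation u_t - D u_xx = 0 has no u_tt or u_xt term: b = c = 0
print(classify_pde(-1.0, 0.0, 0.0))   # parabolic
# Laplace's equation u_xx + u_yy = 0: a = c = 1, b = 0
print(classify_pde(1.0, 0.0, 1.0))    # elliptic
```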

This course primarily focuses on parabolic equations, which are typically the easiest to solve and analyse.

4.2 Time-Dependent Problems

Solution approaches to time-dependent problems often hinge on the domain of the prescribed problem. If the problem is prescribed on an infinite (or semi-infinite) domain then transform and analytic methods are appropriate; otherwise the infinite domain must be approximated by a large finite domain to render the problem amenable to numerical methods. As the domain truncation suggests, problems on bounded domains are well suited to being solved by numerical methods.

Figure 4.1 below illustrates the region in which we want to solve our problem.

Initial-boundary value problem domain for a time-dependent PDE in one spatial dimension.

Figure 4.1: Initial-boundary value problem domain for a time-dependent PDE in one spatial dimension.

Consider the heat equation on this domain. The problem is prescribed by \[\begin{equation} u_t = D u_{xx},\tag{4.1} \end{equation}\] subject to the initial condition \[\begin{equation} u(x,0) = f(x), \end{equation}\] and Dirichlet boundary conditions, \[\begin{equation} u(a,t) = \alpha, \qquad u(b,t) = \beta,\tag{4.2} \end{equation}\] or the Neumann boundary conditions, \[\begin{equation} u_x(a,t) = \alpha, \qquad u_x(b,t) = \beta.\tag{4.3} \end{equation}\] Our approach to numerically solving this equation is to begin by discretising the problem region into a discrete grid, initialising our solution from the initial condition, and then computing the solution at the next time step (row) through an update rule prescribed by the equation and boundary conditions. The boundary conditions (Dirichlet vs Neumann) will be discussed in depth in Section 4.3.2.

4.3 Fully Discrete Methods

Analytic solutions for nonlinear partial differential equations are often difficult or impossible to compute. Numerical methods, such as the Finite Difference Method, provide a robust framework for approximating the solution of a PDE. We leverage analysis of the resulting numerical schemes to confirm the properties of the obtained approximate solution, such as convergence rate (accuracy), consistency and stability.

4.3.1 Explicit Finite Difference Method

The explicit finite difference method is a quick and easy approach to solving parabolic PDEs. It is usually a good starting point for getting an idea of the behaviour of a problem, however it suffers from strict stability criteria and can sometimes be unconditionally unstable (meaning no parameter choice will lead to a stable scheme).

Example 4.1 For the moment we consider the heat equation (4.1). The domain \((x,t) \in [a,b]\times[0,\infty)\) is discretised into \(N+1\) spatial points, \[\begin{equation} x = [x_0=a,x_1=a+\Delta x, \ldots,x_{N-1} = a+(N-1)\Delta x, x_N = b], \end{equation}\] and temporal discretisation \[\begin{equation} t = [t_0=0,t_1=\Delta t, \ldots,t_{n-1} = (n-1)\Delta t, t_n = n\Delta t, \ldots ]. \end{equation}\]
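In code, this discretisation amounts to two uniformly spaced arrays. A minimal sketch (the domain \([0,1]\), step sizes and step counts are placeholder choices of our own):

```python
import numpy as np

a, b = 0.0, 1.0        # spatial domain endpoints (illustrative values)
N = 4                  # number of spatial intervals -> N + 1 grid points
dt = 0.01              # time step
n_steps = 10           # number of time levels to generate

x = np.linspace(a, b, N + 1)        # x_0 = a, x_1 = a + dx, ..., x_N = b
t = dt * np.arange(n_steps + 1)     # t_0 = 0, t_1 = dt, t_2 = 2 dt, ...
dx = (b - a) / N                    # uniform spacing, here 0.25
```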

The spatial discretisation for \(N=4\) is illustrated in Figure 4.2 below.

Schematic Spatial Discretisation for $N = 4$.

Figure 4.2: Schematic Spatial Discretisation for \(N = 4\).

The entire discrete problem may be visualised as a grid, as in Figure 4.3

Schematic Discretisation for $N = 8$. Blue nodes indicate values known from the initial condition. Red nodes indicate values known from boundary values. Purple nodes indicate values known from both initial and boundary conditions (and should be in agreement with one another).

Figure 4.3: Schematic Discretisation for \(N = 8\). Blue nodes indicate values known from the initial condition. Red nodes indicate values known from boundary values. Purple nodes indicate values known from both initial and boundary conditions (and should be in agreement with one another).

In order to discretise the differential operators we consider the Taylor expansion \[\begin{equation} u(x,t+\Delta t) = u(x,t) + \Delta t\, u_t(x,t) + \mathcal{O}(\Delta t^2). \end{equation}\] As a shorthand we denote the discrete point of the solution \(u(x_i,t_n)\) by \(u_i^n\) and note that \(i\) represents a spatial index and \(n\) a temporal index. Rearranging the expansion, the temporal derivative is approximated by

\[\begin{equation} u_t(x_i,t_n) = \frac{u(x_i,t_{n+1}) - u(x_i,t_n)}{\Delta t} + \mathcal{O}(\Delta t)= \frac{u_i^{n+1} - u_i^n}{\Delta t} + \mathcal{O}(\Delta t).\tag{4.4} \end{equation}\] Similarly the spatial derivative may be approximated by \[\begin{equation} u_{xx}(x_i,t_n) = \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2} + \mathcal{O}(\Delta x^2).\tag{4.5} \end{equation}\]

Discretising equation (4.1) via equations (4.4) and (4.5) we obtain the following recursive relation (difference equation), \[\begin{equation} \frac{u_i^{n+1} - u_i^n}{\Delta t} = D \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2} + \mathcal{O}(\Delta t)+ \mathcal{O}(\Delta x^2). \end{equation}\] Rearranging and dropping the truncation errors, we have, \[\begin{equation} u_i^{n+1} = u_i^n + \frac{D \Delta t}{\Delta x^2} \left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right), \quad i = 1,\ldots,N-1.\tag{4.6} \end{equation}\]

From the above we note that, in order to compute the value at the next time step, \(n+1\), for spatial location \(i\), we require points \(i-1\), \(i\) and \(i+1\), all from the known time step \(n\). Figures 4.4 and 4.5 below illustrate the stencils for the different schemes. The explicit stencil requires only explicitly known values to iterate the recursive relation defined above. The implicit and Crank-Nicolson stencils require three unknown points to calculate one value at the next time step; essentially we have one equation and three unknowns. How we deal with this is discussed later in Section 4.3.3.

The local truncation error, derived from Taylor’s series, for this scheme is \(\mathcal{O}(\Delta t)+ \mathcal{O}(\Delta x^2)\), meaning the scheme is first order accurate in time and second order accurate in space.
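These orders of accuracy can be checked numerically: halving the step should roughly halve a first-order error and quarter a second-order error. A small sketch using \(\sin\) as a smooth test function (our choice, not from the text):

```python
import numpy as np

x0 = 0.3   # arbitrary evaluation point for the smooth test function sin

def fwd_err(h):
    # forward-difference approximation of d/dx sin at x0 (exact: cos)
    return abs((np.sin(x0 + h) - np.sin(x0)) / h - np.cos(x0))

def cen2_err(h):
    # central-difference approximation of d^2/dx^2 sin at x0 (exact: -sin)
    return abs((np.sin(x0 + h) - 2.0*np.sin(x0) + np.sin(x0 - h)) / h**2
               + np.sin(x0))

r_time  = fwd_err(0.1) / fwd_err(0.05)     # ~2: first-order accurate
r_space = cen2_err(0.1) / cen2_err(0.05)   # ~4: second-order accurate
```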

We note that equation (4.6) is only valid for \(i=1,\ldots, N-1\); what happens when \(i=0\) or \(i=N\)? Boundary conditions from the prescribed problem factor in and allow for the computation at each boundary.

Explicit Finite Difference Stencil.

Figure 4.4: Explicit Finite Difference Stencil.

Implicit Finite Difference Stencil.

Figure 4.5: Implicit Finite Difference Stencil.

4.3.2 Boundary Conditions

Boundary conditions form an important part of the problem description, not only to render the problem numerically tractable, but also as a modelling device. We consider two classes of boundary conditions in this section.

4.3.2.1 Dirichlet Boundary Conditions

Dirichlet boundary conditions, sometimes referred to as clamped boundary conditions, prescribe a constant value or time dependent function at the boundary. This means that the value of our solution at \(x=a\) or \(x=b\) is known for all values of time, \(t\). Continuing on from the explicit formulation in the previous section, we now see how to incorporate Dirichlet boundary conditions into our numerical scheme.

Regardless of the domain \([a,b]\), the boundary conditions are imposed on the numerical scheme at \(x_0\) and \(x_N\) (red nodes in Figure 4.3). According to equation (4.2) we have,

\[\begin{equation} u(a,t) = \alpha, \qquad u(b,t) = \beta, \end{equation}\] which is discretised as \[\begin{equation} u_0^n = \alpha, \qquad u_N^n = \beta, \qquad \text{for all } n. \end{equation}\] These values may now be incorporated into the update scheme, equation (4.6), when \(i=1\) and \(i=N-1\), due to the width of the stencil.

Now that we have a mechanism to update all values in our discretisation, we can formulate the scheme in vector form. Let \[\begin{equation} \underline{u}^n = \begin{pmatrix} u_0^n\\ u_1^n\\ \vdots\\ u_N^n \end{pmatrix}. \end{equation}\]

By grouping like terms in equation (4.6) we may rewrite that scheme in matrix form, \[\begin{equation} \underline{u}^{n+1} = {\bf A} \underline{u}^n, \end{equation}\] where \[\begin{equation} {\bf A} = \begin{pmatrix} 1 & 0 & 0 & 0 & \ldots & 0 \\ \Lambda_1 & \Lambda_2 & \Lambda_1 & 0 & \ldots & 0 \\ 0 & \Lambda_1 & \Lambda_2 & \Lambda_1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots & \Lambda_1 & \Lambda_2 & \Lambda_1 & 0 \\ 0 & \ldots & 0 & \Lambda_1 & \Lambda_2 & \Lambda_1 \\ 0 & \ldots & 0 & 0 & 0 & 1 \end{pmatrix}, \end{equation}\] and \[\begin{align} \Lambda_1 &= \frac{D \Delta t}{\Delta x^2},\\ \Lambda_2 &= 1- \frac{2 D \Delta t}{\Delta x^2}. \end{align}\] Notice that the top and bottom rows enforce the boundary values.

Exercise 4.1 Verify by hand for \(N=4\) that the above matrix equation indeed recovers the correct equations.
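The same check can be done programmatically: assemble \({\bf A}\) and inspect its rows. A sketch (the function name and parameter values are our own):

```python
import numpy as np

def explicit_dirichlet_matrix(N, D, dt, dx):
    """Update matrix A for the explicit scheme with Dirichlet
    boundary conditions, acting on the vector (u_0, ..., u_N)."""
    L1 = D * dt / dx**2          # Lambda_1
    L2 = 1.0 - 2.0 * L1          # Lambda_2
    A = np.zeros((N + 1, N + 1))
    A[0, 0] = 1.0                # top row: u_0^{n+1} = u_0^n = alpha
    A[N, N] = 1.0                # bottom row: u_N^{n+1} = u_N^n = beta
    for i in range(1, N):        # interior rows: three-point stencil
        A[i, i - 1], A[i, i], A[i, i + 1] = L1, L2, L1
    return A

A = explicit_dirichlet_matrix(4, 1.0, 0.001, 0.25)
# each row sums to Lambda_1 + Lambda_2 + Lambda_1 = 1 (interior)
# or to 1 (boundary), so a constant solution is preserved exactly
```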
Example 4.2 Using a forward time central space finite difference scheme on the heat equation given by: \[ \dfrac{\partial u}{\partial t} = \dfrac{\partial^2 u}{\partial x^2}, \] where: \[\begin{eqnarray*} &u(x, 0) = \sin(\pi x), \quad u(0, t) = 0, \quad u(1, t) = 0, \\ &\quad \Delta x = 0.05, \quad \Delta t = 0.0013 \quad \text{and} \quad N = 50. \end{eqnarray*}\]
we obtain the following numerical solution plotted below:
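A vectorised implementation of this example can be sketched as follows. Since \(\Delta x = 0.05\) on \([0,1]\) gives 21 grid points, we read \(N = 50\) as the number of time steps (an assumption on our part); with \(D = 1\) the exact solution is \(u(x,t) = e^{-\pi^2 t}\sin(\pi x)\), which the scheme tracks closely:

```python
import numpy as np

# Example 4.2: u_t = u_xx on [0, 1], u(x, 0) = sin(pi x),
# u(0, t) = u(1, t) = 0; we take "N = 50" to mean 50 time steps.
D, dx, dt, steps = 1.0, 0.05, 0.0013, 50
x = np.linspace(0.0, 1.0, 21)        # 21 points for dx = 0.05

u = np.sin(np.pi * x)                # initial condition
r = D * dt / dx**2                   # = 0.52, marginally above 1/2; any
                                     # instability grows only from
                                     # roundoff and stays negligible here
for _ in range(steps):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0*u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0               # Dirichlet boundary values

exact = np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))      # agrees to roughly 3 decimal places
```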

4.3.2.2 Neumann Boundary Conditions

Neumann boundary conditions prescribe a constant value or time-dependent function for the derivative of the dependent variable at the boundaries. A common modelling choice is to require that no heat/mass leaves the system, so that heat/mass at the boundary bounces back into the problem domain; this is known as a zero-flux boundary condition, mathematically represented by the derivative of the function being zero at the boundaries for all time, \(t\).

Implementation of derivative boundary conditions is slightly more complicated. Consider the boundary conditions (equation (4.3)),

\[\begin{equation} u_x(a,t) = \alpha, \qquad u_x(b,t) = \beta. \end{equation}\] We may discretise the boundary conditions using a central finite difference formula \[\begin{equation} u_x(x_i,t_n) \approx \frac{u_{i+1} - u_{i-1}}{2 \Delta x}, \end{equation}\] evaluating the above at the boundary \(x=a\) or \(i=0\), we have, \[\begin{equation} \frac{u_{1} - u_{-1}}{2 \Delta x} = \alpha. \end{equation}\] This introduces a ‘ghost’ point at \(i=-1\); solving for it gives \[\begin{equation} u_{-1} = u_{1} - 2 \Delta x \alpha. \end{equation}\] Similarly for \(x=b\) or \(i=N\), \[\begin{equation} \frac{u_{N+1} - u_{N-1}}{2 \Delta x} = \beta, \end{equation}\] which introduces a ‘ghost’ point at \(i=N+1\), giving \[\begin{equation} u_{N+1} = u_{N-1} + 2 \Delta x \beta. \end{equation}\]

Using these ghost points we may now evaluate our update scheme (equation (4.6)) at points \(i=0\) (which requires \(u_{-1}^n\)) and \(i=N\) (which requires \(u_{N+1}^n\)).

We may again formulate the problem in matrix form, taking into account the boundary conditions. In the case of Neumann boundary conditions we have, \[\begin{equation} \underline{u}^{n+1} = {\bf B} \underline{u}^n + \underline{c}, \end{equation}\] where

\[\begin{equation} {\bf B} = \begin{pmatrix} \Lambda_2 & 2\Lambda_1& 0 & 0 & \ldots & 0 \\ \Lambda_1 & \Lambda_2 & \Lambda_1 & 0 & \ldots & 0 \\ 0 & \Lambda_1 & \Lambda_2 & \Lambda_1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots & \Lambda_1 & \Lambda_2 & \Lambda_1 & 0 \\ 0 & \ldots & 0 & \Lambda_1 & \Lambda_2 & \Lambda_1 \\ 0 & \ldots & 0 & 0 & 2\Lambda_1& \Lambda_2 \end{pmatrix}, \end{equation}\] and \[\begin{equation} \underline{c} = \begin{pmatrix} -2 \Lambda_1 \Delta x \alpha\\ 0\\ \vdots\\ 0\\ 2 \Lambda_1 \Delta x \beta \end{pmatrix}, \end{equation}\] and \[\begin{align} \Lambda_1 &= \frac{D \Delta t}{\Delta x^2},\\ \Lambda_2 &= 1- \frac{2 D \Delta t}{\Delta x^2}. \end{align}\]
Exercise 4.2 Again, verify by hand for \(N=4\) that the above matrix equation indeed recovers the correct equations.
Example 4.3 Using a forward time central space finite difference scheme on the heat equation given by: \[ \dfrac{\partial u}{\partial t} = \dfrac{\partial^2 u}{\partial x^2}, \] where: \[\begin{eqnarray*} &u(x, 0) = x(1 - x)(x - \frac{1}{2}), \ \ u_x(0, t) = -\frac{1}{2}, \\ & u_x(1, t) = -\frac{1}{2}, \ \Delta x = 0.05, \quad \Delta t = 0.0013 \ \ \text{and} \ \ N = 50. \end{eqnarray*}\]
we obtain the following numerical solution plotted below:
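The matrix formulation with \({\bf B}\) and \(\underline{c}\) can be cross-checked against a direct ghost-point update; the two must agree exactly. A sketch (function and variable names are our own):

```python
import numpy as np

def neumann_system(N, D, dt, dx, alpha, beta):
    """Assemble B and c for the explicit scheme with Neumann
    boundary conditions (ghost points already eliminated)."""
    L1 = D * dt / dx**2          # Lambda_1
    L2 = 1.0 - 2.0 * L1          # Lambda_2
    B = np.zeros((N + 1, N + 1))
    for i in range(1, N):        # interior rows: three-point stencil
        B[i, i - 1], B[i, i], B[i, i + 1] = L1, L2, L1
    B[0, 0], B[0, 1] = L2, 2.0 * L1          # boundary rows after
    B[N, N], B[N, N - 1] = L2, 2.0 * L1      # ghost-point elimination
    c = np.zeros(N + 1)
    c[0] = -2.0 * L1 * dx * alpha
    c[N] = 2.0 * L1 * dx * beta
    return B, c

# cross-check one matrix step against a direct ghost-point update
N, D, dt, dx = 4, 1.0, 0.001, 0.25
alpha = beta = -0.5
B, c = neumann_system(N, D, dt, dx, alpha, beta)
u = np.linspace(0.0, 1.0, N + 1) ** 2        # arbitrary test data
u_matrix = B @ u + c

L1 = D * dt / dx**2
u_ext = np.concatenate(([u[1] - 2*dx*alpha], u, [u[N-1] + 2*dx*beta]))
u_direct = u_ext[1:-1] + L1 * (u_ext[2:] - 2.0*u_ext[1:-1] + u_ext[:-2])
```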

4.3.3 Implicit Finite Difference Method

As alluded to briefly in the previous section, explicit methods often suffer from strict stability conditions. This usually means that \(\Delta x\) and \(\Delta t\) need to be smaller than we might like them to be, leading to slow computation of our numerical simulation. Implicit methods are computationally more expensive, since a linear system must be solved at each time step, but generally have much more appealing stability properties. Unconditionally stable implicit methods allow us to choose \(\Delta x\) and \(\Delta t\) however we like, which means we can dictate the spatial resolution we seek (\(\Delta x\)) while taking as large a time step (\(\Delta t\)) as we like.

We construct an implicit finite difference scheme by simply evaluating \(u_{xx}\) at the new (unknown) time step, \(n+1\). Concretely we have,

\[\begin{equation} u_t(x_i,t_n) = \frac{u_i^{n+1} - u_i^n}{\Delta t} + \mathcal{O}(\Delta t),\tag{4.7} \end{equation}\] as before. However the spatial derivative is now approximated by \[\begin{equation} u_{xx}(x_i,t_n) = \frac{u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}}{\Delta x^2} + \mathcal{O}(\Delta x^2).\tag{4.8} \end{equation}\] As previously mentioned and implied by the implicit stencil, we now have three unknown variables (\(u_{i+1}^{n+1}\), \(u_{i}^{n+1}\) and \(u_{i-1}^{n+1}\)) for each spatial point. However, once the boundary values are incorporated the entire row, \(n+1\), can be computed at once. Using the above discretisation we can write our numerical scheme as \[\begin{equation} \frac{u_i^{n+1} - u_i^n}{\Delta t} = D \frac{u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}}{\Delta x^2}. \end{equation}\] Rearranging and collecting terms we have \[\begin{equation} -\frac{D \Delta t}{\Delta x^2} u_{i+1}^{n+1}+ \left(1+\frac{2D \Delta t}{\Delta x^2}\right)u_i^{n+1} - \frac{D \Delta t}{\Delta x^2} u_{i-1}^{n+1} = u_i^n, \end{equation}\] which, in matrix form, is given by \[\begin{equation} {\bf C} \underline{u}^{n+1} = \underline{u}^n,\tag{4.9} \end{equation}\] where (for Dirichlet boundary conditions)

\[\begin{equation} {\bf C} = \begin{pmatrix} 1 & 0 & 0 & 0 & \ldots & 0 \\ -\Lambda_1 & \Lambda_3 &-\Lambda_1 & 0 & \ldots & 0 \\ 0 &-\Lambda_1 & \Lambda_3 &-\Lambda_1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots &-\Lambda_1 & \Lambda_3 &-\Lambda_1 & 0 \\ 0 & \ldots & 0 &-\Lambda_1 & \Lambda_3 &-\Lambda_1 \\ 0 & \ldots & 0 & 0 & 0 & 1 \end{pmatrix}, \end{equation}\] and \[\begin{align} \Lambda_1 &= \frac{D \Delta t}{\Delta x^2},\\ \Lambda_3 &= 1+ \frac{2 D \Delta t}{\Delta x^2}. \end{align}\]

Note that equation (4.9) is in the form of \({\bf A} \underline{x} = \underline{b}\), which we can solve using previously learned techniques.
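In code, each implicit step is one linear solve with \({\bf C}\). The sketch below (parameter choices are our own) deliberately uses a time step eight times larger than the explicit stability limit \(\Delta t < \Delta x^2/(2D)\); the implicit solution still decays smoothly rather than blowing up:

```python
import numpy as np

def implicit_dirichlet_matrix(N, D, dt, dx):
    """Matrix C of the implicit scheme with Dirichlet boundaries."""
    L1 = D * dt / dx**2          # Lambda_1
    L3 = 1.0 + 2.0 * L1          # Lambda_3
    C = np.eye(N + 1)            # rows 0 and N clamp the boundary values
    for i in range(1, N):        # interior rows: three-point stencil
        C[i, i - 1], C[i, i], C[i, i + 1] = -L1, L3, -L1
    return C

# u_t = u_xx, u(x, 0) = sin(pi x), u = 0 at both ends.
# dt = 0.01 is 8x the explicit limit dx^2 / 2 = 0.00125.
N, D, dt, steps = 20, 1.0, 0.01, 20
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
C = implicit_dirichlet_matrix(N, D, dt, dx)

u = np.sin(np.pi * x)
for _ in range(steps):
    u = np.linalg.solve(C, u)    # one step: solve C u^{n+1} = u^n
```

For large grids a dedicated tridiagonal (banded) solver would be preferable to a dense `np.linalg.solve`, but the dense version keeps the sketch short.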

Exercise 4.3 Verify (by hand with \(N=4\)) that the above matrix equation correctly recovers the implicit finite difference scheme.
Exercise 4.4 Construct the matrix equation for Neumann boundary conditions for the implicit finite difference, and verify (by hand with \(N=4\)) that the matrix equations correctly recover the numerical scheme.

4.4 Consistency, Stability and Convergence

This section briefly discusses the three main pillars of numerical analysis. More formal descriptions of these concepts will be given, but we outline the ideas here. Consistency is the idea that the numerical scheme we construct recovers the governing equation as our discretisation parameters (\(\Delta x\) and \(\Delta t\)) tend to 0, in essence confirming that our numerical scheme approximates the correct equation. In a similar vein, but more difficult to prove, convergence is the idea that as our discretisation parameters tend to 0, the solution obtained by our numerical approximation tends to the true (potentially unknown) solution. Finally, stability of a numerical scheme entails an analysis of how errors introduced by our approximation grow/shrink/change as our simulation is computed. Although these are three distinct ideas, they are related.

4.4.1 Consistency

In order to prove consistency of a proposed numerical scheme, we are required to show that as our discretisation parameters tend to 0, we recover the original equation. We typically make use of the Taylor series (in reverse) to show this. Consistency is illustrated by way of example.

Example 4.4 Show that, \[\begin{equation} u_i^{n+1} = \frac{D \Delta t}{\Delta x^2} u_{i+1}^n + u_i^n\left(1-\frac{2 D \Delta t}{\Delta x^2} + \Delta t -\Delta t u_i^n\right) + \frac{D \Delta t}{\Delta x^2} u_{i-1}^n , \end{equation}\] is consistent with \[\begin{equation} u_t = D u_{xx} + u(1-u). \end{equation}\] Rearranging our numerical scheme we have, \[\begin{equation} u_i^{n+1} = u_i^n + \frac{D \Delta t}{\Delta x^2} \left(u_{i+1}^n - 2 u_i^n + u_{i-1}^n\right) +\Delta t u_i^n\left(1-u_i^n\right) \end{equation}\] \[\begin{equation} \frac{u_i^{n+1} - u_i^n}{\Delta t} = D \frac{u_{i+1}^n - 2 u_i^n + u_{i-1}^n}{\Delta x^2} + u_i^n\left(1-u_i^n\right). \end{equation}\] Applying limits, \[\begin{equation} \lim\limits_{\Delta x, \Delta t \rightarrow 0}\frac{u_i^{n+1} - u_i^n}{\Delta t} = \lim\limits_{\Delta x, \Delta t \rightarrow 0} D \frac{u_{i+1}^n - 2 u_i^n + u_{i-1}^n}{\Delta x^2} +\lim\limits_{\Delta x, \Delta t \rightarrow 0} u_i^n\left(1-u_i^n\right), \end{equation}\] which recovers, \[\begin{equation} u_t = D u_{xx} + u(1-u), \end{equation}\] from the Taylor series.
Exercise 4.5 Repeat the above example with an implicit finite difference approximation.

4.4.2 von Neumann Stability Analysis

In order to conduct a von Neumann stability analysis, we insert a ‘trial’ solution into our numerical scheme and assess how that trial solution behaves with respect to the discretisation parameters. In truth, what we are doing is constructing a single (general) Fourier mode and proving that if an arbitrary Fourier mode is bounded then they all are, and the numerical scheme is stable.

Example 4.5 We again use the explicit finite difference approximation to the heat equation as an example of how to conduct a von Neumann Stability analysis. Consider \[\begin{equation} u_i^{n+1} = u_i^n + \frac{D \Delta t}{\Delta x^2} \left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right).\tag{4.10} \end{equation}\] We assume an Ansatz in the form of a Fourier mode \[\begin{equation} u_i^n = \zeta^n e^{I q i \Delta x}, \end{equation}\] where \(q\) is a real spatial wave number of the Fourier series, \(I = \sqrt{-1}\) and \(i\) and \(n\) are the usual indices. Substituting this Ansatz into our numerical scheme we get \[\begin{equation} \zeta^{n+1} e^{I q i \Delta x} = \zeta^n e^{I q i \Delta x} + \frac{D \Delta t}{\Delta x^2} \zeta^n \left(e^{I q i \Delta x} e^{I q \Delta x} - 2 e^{I q i \Delta x} + e^{I q i \Delta x}e^{-I q \Delta x}\right), \end{equation}\] dividing through by \(\zeta^n e^{I q i \Delta x}\) we get \[\begin{equation} \zeta = 1 + \frac{2 D \Delta t}{\Delta x^2} \left(\frac{e^{I q \Delta x}+e^{-I q \Delta x}}{2} - 1\right). \end{equation}\] Using Euler’s formula and the double angle formula for \(\cos(\theta)\) we get, \[\begin{equation} \zeta = 1 - \frac{4 D \Delta t}{\Delta x^2} \sin^2 \left(\frac{q \Delta x}{2}\right). \end{equation}\] For our scheme to be stable, we require the amplification factor \(|\zeta| < 1\). We should rather phrase this as, “What conditions need to be placed on \(\Delta x\) and \(\Delta t\) so that \(|\zeta| < 1\)?” \[\begin{align} |\zeta| < 1 &\Rightarrow |1 - \frac{4 D \Delta t}{\Delta x^2} \sin^2 \left(\frac{q \Delta x}{2}\right)| <1, \\ &\Rightarrow -1 < 1 - \frac{4 D \Delta t}{\Delta x^2} \sin^2 \left(\frac{q \Delta x}{2}\right) < 1, \\ &\Rightarrow -2 < - \frac{4 D \Delta t}{\Delta x^2} \sin^2 \left(\frac{q \Delta x}{2}\right) < 0, \\ &\Rightarrow 0 < \frac{4 D \Delta t}{\Delta x^2} \sin^2 \left(\frac{q \Delta x}{2}\right) < 2. 
\end{align}\] Since \(\Delta t\) and \(\Delta x\) are positive and \(\sin^2 \left(\frac{q \Delta x}{2}\right)\) is nonnegative, the left-hand inequality is always satisfied. Hence, our stability is conditional on \[ \frac{4 D \Delta t}{\Delta x^2} \sin^2 \left(\frac{q \Delta x}{2}\right) < 2, \] but since \(0\leq\sin^2 \left(\frac{q \Delta x}{2}\right)\leq 1\), it suffices that \[\begin{align} \frac{4 D \Delta t}{\Delta x^2} &< 2,\\ \Rightarrow \frac{D \Delta t}{\Delta x^2} &< \frac{1}{2}, \end{align}\] or, in terms of \(\Delta t\), \[\begin{equation} \Delta t < \frac{\Delta x^2}{2D}. \end{equation}\]
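The amplification factor is easy to probe numerically; the worst case is the highest mode representable on the grid, \(q\Delta x = \pi\). A sketch with illustrative numbers of our own choosing:

```python
import numpy as np

def amplification(D, dt, dx, q):
    """Amplification factor of the explicit scheme at wavenumber q."""
    return 1.0 - 4.0 * D * dt / dx**2 * np.sin(q * dx / 2.0) ** 2

D, dx = 1.0, 0.1
q = np.pi / dx                   # highest grid mode: q * dx = pi

z_ok  = amplification(D, 0.004, dx, q)    # D dt / dx^2 = 0.40 < 1/2
z_bad = amplification(D, 0.0053, dx, q)   # D dt / dx^2 = 0.53 > 1/2
# |z_ok| = 0.6 < 1: errors decay; |z_bad| = 1.12 > 1: errors grow
```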

Example 4.6 In order to illustrate the stability concerns with the explicit finite difference scheme, consider the following example: \[\begin{eqnarray*} & \dfrac{\partial u}{\partial t} = \dfrac{\partial^2 u}{\partial x^2}, \\ & u(0, t) = u(1, t) = 0, \quad u(x, 0) = \sin^2(2\pi x), \quad \Delta x = 0.1. \end{eqnarray*}\] Now applying our finite difference scheme, we consider two sets of parameters. In the first plot, we see a stable solution where \(\Delta t = 0.004\). The von Neumann analysis above predicts this stability, since \(\Delta t = 0.004 < \frac{\Delta x^2}{2D} = \frac{0.1^2}{2} = 0.005\).

In our second plot, we have \(\Delta t = 0.0053\) and a resulting unstable solution, since our choice of \(\Delta t\) exceeds the maximum stable step size obtained above.

This shows that controlling the error magnification for practical step sizes is a crucial aspect of obtaining an efficient solution.
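This example can be reproduced in a few lines. In the sketch below (the step count of 250 is our own choice), the stable run decays quietly while the unstable run grows by orders of magnitude, dominated by the oscillatory highest grid mode:

```python
import numpy as np

def explicit_heat(u0, D, dt, dx, steps):
    """Explicit scheme with homogeneous Dirichlet boundaries."""
    u = u0.copy()
    r = D * dt / dx**2
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0*u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0
    return u

dx = 0.1
x = np.linspace(0.0, 1.0, 11)
u0 = np.sin(2.0 * np.pi * x) ** 2

u_stable   = explicit_heat(u0, 1.0, 0.004, dx, 250)    # r = 0.40
u_unstable = explicit_heat(u0, 1.0, 0.0053, dx, 250)   # r = 0.53
# u_stable has decayed toward zero; u_unstable has blown up
```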

4.4.3 Lax Equivalence Theorem

Theorem 4.1 (Lax Equivalence Theorem) If we have a consistent approximation to a well-posed linear initial-value problem, then the approximation is convergent if and only if it is stable.

4.4.3.1 Tutorial

  1. Prove that the functions (a) \(u(x, t) = e^{2t + x} + e^{2t - x}\), (b) \(u(x, t) = e^{2t + x}\) are solutions of the heat equation \(u_t = 2u_{xx}\) with the specified boundary conditions: \[ \left\{ \begin{array}{l} u(x, 0) = e^{x} \text{ for } 0 \leq x \leq 1 \\ u(0, t) = e^{2t} \text{ for } 0 \leq t \leq 1 \\ u(1, t) = e^{2t + 1} \text{ for } 0 \leq t \leq 1 \end{array} \right. \]
  2. Prove that if \(f(x)\) is a degree 3 polynomial, then \(u(x, t) = f(x) + ct f^{\prime\prime}(x)\) is a solution of the initial value problem \(u_t = cu_{xx},\ u(x, 0) = f(x)\).
  3. True or False: For solving a time-dependent partial differential equation, a finite difference method that is both consistent and stable converges to the true solution as the step sizes in time and space go to zero.
  4. Use step sizes of (a) \(\Delta x = 0.1\) and \(\Delta t = 0.0005\) and (b) \(\Delta x = 0.1\) and \(\Delta t = 0.01\) to approximate the solution to the heat equation: \[ u_t = u_{xx}, \ 0 < x < 1,\ t \geq 0, \] with boundary conditions: \[ u(0, t) = u(1, t) = 0, \ t > 0, \] and initial conditions: \[ u(x, 0) = \sin(\pi x), \quad 0 \leq x \leq 1. \] Compare the results at \(t = 0.5\) to the exact solution: \[ u(x, t) = e^{-\pi^2 t}\sin(\pi x). \]
  5. Approximate the solution to the following partial differential equation using the backward difference method: \[ u_t = u_{xx}, \ 0 < x < 2,\ t \geq 0, \] \[ u(0, t) = u(2, t) = 0, \ t > 0, \] \[ u(x, 0) = \sin(\frac{\pi}{2} x), \quad 0 \leq x \leq 2. \] Use \(m = 4, T =0.1\), and \(N = 2\) and compare your results to the actual solution \(u(x, t) = e^{-(\pi^2/4)t}\sin\frac{\pi}{2}x\).
  6. Approximate the solution to the following partial differential equation using the backward difference method: \[ u_t = \dfrac{1}{16}u_{xx}, \ 0 < x < 1,\ t \geq 0, \] \[ u(0, t) = u(1, t) = 0, \ t > 0, \] \[ u(x, 0) = 2\sin(2\pi x), \quad 0 \leq x \leq 1. \] Use \(m = 3, T =0.1\), and \(N = 2\) and compare your results to the actual solution \(u(x, t) = 2e^{-(\pi^2/4)t}\sin 2\pi x\).
  7. Using the code you have written in Lab 4, apply your functions on the PDEs given in questions 5 and 6. Explore various step sizes. Compare your results against the actual solutions.